
    End-to-End Trust Fulfillment of Big Data Workflow Provisioning over Competing Clouds

    Cloud computing has emerged as a promising and powerful paradigm for delivering data-intensive, high-performance computation, applications, and services over the Internet. Cloud computing has enabled the implementation and success of Big Data, a relatively recent phenomenon consisting of the generation and analysis of abundant data from various sources. Accordingly, to satisfy the growing demands of Big Data storage, processing, and analytics, a large market has emerged for cloud service providers offering a myriad of resources, platforms, and infrastructures. The proliferation of these services often makes it difficult for consumers to select the most suitable and trustworthy provider to fulfill the requirements of building complex workflows and applications in a relatively short time. In this thesis, we first propose a quality specification model to support dual pre- and post-cloud workflow provisioning, consisting of service provider selection and workflow quality enforcement and adaptation. This model captures key quality properties at different stages of the Big Data value chain, enabling standardized quality specification, monitoring, and adaptation. Subsequently, we propose a two-dimensional trust-enabled framework to facilitate end-to-end Quality of Service (QoS) enforcement that: 1) automates cloud service provider selection for Big Data workflow processing, and 2) maintains the required QoS levels of Big Data workflows during runtime through dynamic orchestration using multi-model architecture-driven workflow monitoring, prediction, and adaptation. The trust-based automatic service provider selection scheme we propose in this thesis is comprehensive and adaptive, as it relies on a dynamic trust model to evaluate the QoS of a cloud provider before any selection decision is made. It is a multi-dimensional trust model for Big Data workflows over competing clouds that assesses the trustworthiness of cloud providers based on three trust levels: (1) the presence of up-to-date, verified cloud resource capabilities, (2) reputational evidence measured by neighboring users, and (3) a recorded personal history of experiences with the cloud provider. The trust-based workflow orchestration scheme we propose aims to avoid performance degradation and cloud service interruption. Our workflow orchestration approach is based not only on automatic adaptation and reconfiguration supported by monitoring, but also on predicting cloud resource shortages, thus preventing performance degradation. We formalize the cloud resource orchestration process using a state machine that efficiently captures the dynamic properties of the cloud execution environment. In addition, we use a model checker to validate our monitoring model in terms of reachability, liveness, and safety properties. We evaluate both our automated service provider selection scheme and our cloud workflow orchestration, monitoring, and adaptation schemes on a workflow-enabled Big Data application. A set of scenarios was carefully chosen to evaluate the performance of the service provider selection, workflow monitoring, and adaptation schemes we have implemented. The results demonstrate that our service selection outperforms other selection strategies and ensures trustworthy service provider selection. The results of evaluating automated workflow orchestration further show that our model is self-adapting and self-configuring, and that it reacts efficiently to changes and adapts accordingly while enforcing workflow QoS.
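
    To make the three-level trust computation concrete, the following minimal Python sketch combines verified-capability, reputation, and personal-history evidence into a single score and ranks competing providers. The weighted-sum combination, the weights, and all scores are illustrative assumptions, not the thesis's actual trust formulation.

        from dataclasses import dataclass

        @dataclass
        class TrustEvidence:
            capability: float  # verified, up-to-date resource capabilities, in [0, 1]
            reputation: float  # evidence aggregated from neighboring users, in [0, 1]
            history: float     # recorded personal experience with the provider, in [0, 1]

        def trust_score(ev, weights=(0.4, 0.3, 0.3)):
            # Hypothetical weighted combination of the three trust levels.
            return (weights[0] * ev.capability
                    + weights[1] * ev.reputation
                    + weights[2] * ev.history)

        def select_provider(candidates):
            # Pick the competing cloud provider with the highest trust score.
            return max(candidates, key=lambda name: trust_score(candidates[name]))

        providers = {
            "cloud-a": TrustEvidence(0.9, 0.7, 0.8),
            "cloud-b": TrustEvidence(0.8, 0.9, 0.6),
        }
        print(select_provider(providers))  # "cloud-a" under these toy weights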

    Empowering Patient Similarity Networks through Innovative Data-Quality-Aware Federated Profiling

    Continuous monitoring of patients involves collecting and analyzing sensory data from a multitude of sources. To reduce communication overhead, ensure data privacy and security, reduce data loss, and maintain efficient resource usage, processing and analytics are moved close to where the data are located (e.g., the edge). However, data quality (DQ) can degrade because of imprecise or malfunctioning sensors, dynamic changes in the environment, transmission failures, or delays. It is therefore crucial to monitor data quality and detect problems as quickly as possible, so that they do not mislead clinical judgment and lead to the wrong course of action. In this article, a novel approach called federated data quality profiling (FDQP) is proposed to assess the quality of data at the edge. FDQP is inspired by federated learning (FL) and serves as a condensed document, or guide, for node data quality assurance. A formal FDQP model is developed to capture the quality dimensions specified in the data quality profile (DQP). The proposed approach uses federated feature selection to improve classifier precision and to rank features based on criteria such as feature value, outlier percentage, and missing-data percentage. Extensive experimentation on a fetal dataset split across different edge nodes, under a set of carefully chosen scenarios, was used to evaluate the proposed FDQP model. The results demonstrate that the proposed data-quality-aware federated PSN architecture, leveraging the FDQP model on data collected from edge nodes, effectively improves data quality and, in turn, the accuracy of federated patient similarity network (FPSN)-based machine learning models. Our profiling algorithm relies on lightweight profile exchange rather than full data processing at the edge, which improves data quality while remaining efficient. Overall, FDQP is an effective method for assessing data quality in edge computing environments, and we believe the proposed approach can be applied to scenarios beyond patient monitoring.
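
    As a rough sketch of what a lightweight data quality profile might look like, the following Python snippet summarizes one feature per edge node using missing-data and outlier percentages and ranks features accordingly; only the profile, never the raw readings, would be exchanged. The criteria, thresholds, and feature names are illustrative assumptions, not the article's exact FDQP definition.

        import statistics

        def profile_feature(values):
            # Lightweight quality profile for one feature on an edge node:
            # only this summary is exchanged, never the raw sensor readings.
            present = [v for v in values if v is not None]
            missing_pct = 1 - len(present) / len(values)
            mean, sd = statistics.mean(present), statistics.pstdev(present)
            outliers = [v for v in present if sd and abs(v - mean) > 2 * sd]
            return {"missing_pct": missing_pct,
                    "outlier_pct": len(outliers) / len(present)}

        def rank_features(profiles):
            # Rank features: lower missing/outlier rates first (a stand-in
            # for the article's federated feature-selection criteria).
            return sorted(profiles, key=lambda f: profiles[f]["missing_pct"]
                                                  + profiles[f]["outlier_pct"])

        node_profile = {
            "fetal_heart_rate": profile_feature([120, 121, 122, 119, 123, 120, None, 999]),
            "uterine_contractions": profile_feature([3, 4, 4, 5, 3, 4, 3, 4]),
        }
        print(rank_features(node_profile))  # cleaner features rank first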

    Trust enforcement through self-adapting cloud workflow orchestration

    Providing runtime intelligence for a workflow in a highly dynamic cloud execution environment is a challenging task due to continuously changing cloud resources. Guaranteeing a certain level of workflow Quality of Service (QoS) during execution requires continuous monitoring to detect any performance violation caused by resource shortage or even cloud service interruption. Most orchestration schemes are either configuration- or deployment-dependent, and they do not cope with dynamically changing environment resources. In this paper, we propose a workflow orchestration, monitoring, and adaptation model that relies on trust evaluation to detect QoS performance degradation and perform automatic reconfiguration to guarantee workflow QoS. The monitoring and adaptation schemes detect and repair different types of real-time errors and trigger different adaptation actions, including workflow reconfiguration, migration, and resource scaling. We formalize cloud resource orchestration using a state machine that efficiently captures the dynamic properties of the cloud execution environment. In addition, we use a model checker to validate our model in terms of reachability, liveness, and safety properties. Extensive experimentation is performed using a health monitoring workflow we have developed to handle a dataset from the Medical Information Mart for Intensive Care III (MIMIC-III), deployed over a Docker Swarm cluster. A set of scenarios was carefully chosen to evaluate workflow monitoring and the different adaptation schemes we have implemented. The results show that our automated workflow orchestration model is self-adapting and self-configuring, and that it reacts efficiently to changes and adapts accordingly while supporting a high level of workflow QoS.
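
    The paper formalizes orchestration as a state machine; the sketch below shows, in minimal Python, how such a machine might encode monitored events triggering the reconfiguration, migration, and scaling adaptations mentioned above. The states, events, and transitions are assumed for illustration and are far simpler than the validated model.

        from enum import Enum, auto

        class TaskState(Enum):
            RUNNING = auto()
            DEGRADED = auto()
            RECONFIGURING = auto()
            MIGRATING = auto()
            SCALING = auto()

        # Illustrative transition table: (current state, monitored event) -> next state.
        TRANSITIONS = {
            (TaskState.RUNNING, "qos_violation"): TaskState.DEGRADED,
            (TaskState.DEGRADED, "config_error"): TaskState.RECONFIGURING,
            (TaskState.DEGRADED, "service_interruption"): TaskState.MIGRATING,
            (TaskState.DEGRADED, "resource_shortage"): TaskState.SCALING,
            (TaskState.RECONFIGURING, "recovered"): TaskState.RUNNING,
            (TaskState.MIGRATING, "recovered"): TaskState.RUNNING,
            (TaskState.SCALING, "recovered"): TaskState.RUNNING,
        }

        def step(state, event):
            # Unknown events leave the state unchanged (a safety default).
            return TRANSITIONS.get((state, event), state)

        state = TaskState.RUNNING
        for event in ["qos_violation", "resource_shortage", "recovered"]:
            state = step(state, event)
            print(f"{event} -> {state.name}")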

    A Load Balancing Algorithm Using Radio Signal Strength and Sensitivity for the Multipath Source Routing Protocol

    On-demand reactive routing protocols, such as DSR, are commonly used in MANETs. Multipath Source Routing (MSR) provides DSR with multiple paths to a single destination, which offers better network resource utilization. Applying a weighted load-balancing algorithm in MSR improves routing performance in MANETs by reducing end-to-end delay, since it uses round-trip time (RTT) delay as a factor to distribute traffic over the available routes. However, this solution does not account for signal strength, which is a major cause of packet errors in wireless networks. This thesis proposes a new route-weighting mechanism that makes MSR more adaptive to network errors resulting from loss of signal strength. The new strategy is to collect network information on signal strength and node sensitivity to form a parameter called Received Power and Sensitivity (RPS), and to use it as a weighting parameter in two flavors. The first evaluates the minimum RPS value among the legs between hops on each route. The second accumulates RPS over all the legs of each route. Simulation results show that the new mechanism reduces the packet drop rate and improves throughput with minimal additional overhead.
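
    The two RPS flavors described above reduce to a simple per-route computation; the Python sketch below shows one plausible reading, with received power and receiver sensitivity in dBm. The margin-style definition of RPS and all values are assumptions for illustration, not the thesis's exact formula.

        def rps(received_power_dbm, sensitivity_dbm):
            # Assumed RPS: how far the received signal sits above the
            # receiver's sensitivity floor on one leg (hop-to-hop link).
            return received_power_dbm - sensitivity_dbm

        def route_weight_min(legs):
            # Flavor 1: a route is only as strong as its weakest leg.
            return min(rps(p, s) for p, s in legs)

        def route_weight_cumulative(legs):
            # Flavor 2: accumulate RPS over all legs of the route.
            return sum(rps(p, s) for p, s in legs)

        # (received power, receiver sensitivity) in dBm for each leg of a route
        route = [(-70, -90), (-85, -90), (-60, -90)]
        print(route_weight_min(route))         # 5  -> weakest leg dominates
        print(route_weight_cumulative(route))  # 55 -> whole-route view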

    Multi-model deep learning for cloud resources prediction to support proactive workflow adaptation

    © 2019 IEEE. Scientific workflows are complex, resource-intensive, dynamic in nature, and require elastic cloud resources. To support these requirements, cloud resource prediction schemes forecast resource scarcity and thereby support proactive workflow adaptation. In this paper, we propose a proactive workflow adaptation approach supported by deep-learning-based prediction of cloud resource usage. The model uses an algorithm to evaluate and select the most appropriate prediction model for resource utilization violations for each task of the workflow, and then recommends the proper adaptation actions to maintain the Quality of Service (QoS) of the entire workflow. Runtime monitoring data on cloud resources are continuously fed into machine learning models, including GRU, LSTM, and bidirectional LSTM, to predict future task resource utilization values. The algorithm evaluates the resource predictions using a number of metrics, such as RMSE, MAE, and MAPE, and the prediction model achieving the highest accuracy is selected to determine the needed cloud resources. We conducted a series of experiments to evaluate our approach, and the results demonstrate that the proposed multi-model approach properly predicts cloud resource usage and suggests adaptation actions that guarantee the required workflow QoS.
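
    A minimal Python sketch of the per-task model selection step: score each candidate predictor's forecast on recent utilization data and keep the lowest-error one. The RMSE and MAE metrics follow their standard definitions; the traces, model names, and the choice of RMSE as the deciding metric are illustrative assumptions.

        def rmse(y, y_hat):
            return (sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y)) ** 0.5

        def mae(y, y_hat):
            return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)

        def best_model(y_true, forecasts, metric=rmse):
            # Keep the candidate (e.g. GRU, LSTM, Bi-LSTM output) whose
            # forecast shows the lowest error on the monitored trace.
            return min(forecasts, key=lambda name: metric(y_true, forecasts[name]))

        cpu_true = [0.62, 0.70, 0.81, 0.93]   # observed task CPU utilization
        forecasts = {
            "gru":     [0.60, 0.69, 0.80, 0.90],
            "lstm":    [0.65, 0.72, 0.78, 0.88],
            "bi_lstm": [0.61, 0.71, 0.82, 0.92],
        }
        print(best_model(cpu_true, forecasts))  # "bi_lstm" on this toy trace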

    Deep learning approach to security enforcement in cloud workflow orchestration

    Supporting security and data privacy in cloud workflows has attracted significant research attention. For example, private patient data managed by a workflow deployed on the cloud need to be protected, and communication of such data across multiple stakeholders must also be secured. In general, security threats in cloud environments have been studied extensively. Such threats include data breaches, data loss, denial of service, service rejection, and malicious insiders, arising from issues such as multi-tenancy, loss of control over data, and trust. Supporting the security of a cloud workflow deployed and executed over a dynamic environment, across different platforms, involving different stakeholders and dynamic data, is a difficult task and is the sole responsibility of cloud providers. Therefore, in this paper, we propose an architecture and a formal model for security enforcement in cloud workflow orchestration. The proposed architecture emphasizes monitoring cloud resources, workflow tasks, and data to detect and predict anomalies in cloud workflow orchestration, using a multi-modal approach that combines deep learning, one-class classification, and clustering. It also features an adaptation scheme to cope with anomalies and mitigate their effect on cloud workflow performance. Our prediction model captures unsupervised static and dynamic features and reduces data dimensionality, which leads to better characterization of various cloud workflow tasks and thus better prediction of potential attacks. We conduct a set of experiments to evaluate the proposed anomaly detection, prediction, and adaptation schemes using a real COVID-19 dataset of patient health records. The training and prediction experiments show high anomaly prediction accuracy in terms of precision, recall, and F1 scores. Further experiments show that the cloud workflow maintained high execution performance after the adaptation strategy was applied in response to detected anomalies. The experiments also demonstrate how the proposed architecture, through anomaly detection and prediction, prevents unnecessary wastage of cloud resources.
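
    As a stand-in for the multi-modal detector (which combines deep learning, one-class classification, and clustering), the Python sketch below fits a one-class profile of normal task behavior and flags monitored samples that deviate strongly from it. The features, threshold, and data are assumptions for illustration only.

        import statistics

        def fit_profile(normal_samples):
            # One-class profile of normal behavior: per-feature mean and
            # standard deviation learned from anomaly-free monitoring data.
            columns = list(zip(*normal_samples))
            return [(statistics.mean(c), statistics.pstdev(c)) for c in columns]

        def is_anomalous(sample, profile, z_max=3.0):
            # Flag the sample if any feature deviates strongly from normal.
            return any(sd and abs(x - m) > z_max * sd
                       for x, (m, sd) in zip(sample, profile))

        # (CPU utilization, requests/s) observed during normal workflow runs
        normal = [[0.40, 120], [0.50, 130], [0.45, 125], [0.48, 128]]
        profile = fit_profile(normal)
        print(is_anomalous([0.46, 126], profile))  # False: normal workload
        print(is_anomalous([0.95, 900], profile))  # True: candidate anomaly/attack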

    A Novel Patient Similarity Network (PSN) Framework Based on Multi-Model Deep Learning for Precision Medicine

    Precision medicine can be defined as the comparison of a new patient with existing patients that have similar characteristics, which can be referred to as patient similarity. Several deep learning models have been used to build and apply patient similarity networks (PSNs). However, the challenges of data heterogeneity and dimensionality make it difficult for a single model to reduce data dimensionality and capture the features of diverse data types. In this paper, we propose a multi-model PSN that considers heterogeneous static and dynamic data. The combination of deep learning models and a PSN allows ample clinical evidence and information to be extracted, against which similar patients can be compared. We use bidirectional encoder representations from transformers (BERT) to analyze the contextual data and generate word embeddings, from which semantic features are captured using a convolutional neural network (CNN). Dynamic data are analyzed using a long short-term memory (LSTM)-based autoencoder, which reduces data dimensionality and preserves the temporal features of the data. We propose a data fusion approach combining temporal and clinical narrative data to estimate patient similarity. Our experiments show that the model provides higher classification accuracy in determining various patient health outcomes than traditional classification algorithms.
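
    To illustrate the fusion step, the Python sketch below concatenates a hypothetical narrative embedding with a hypothetical temporal embedding and uses cosine similarity as the PSN edge weight between two patients. The tiny vectors stand in for real BERT/CNN and LSTM-autoencoder outputs; this is a sketch of one plausible fusion, not the paper's actual method.

        import math

        def fuse(narrative_vec, temporal_vec):
            # Late fusion by concatenation: join the embedding of the clinical
            # narrative with the embedding of the temporal (dynamic) data.
            return narrative_vec + temporal_vec

        def cosine_similarity(u, v):
            dot = sum(a * b for a, b in zip(u, v))
            norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
            return dot / norm

        # Toy embeddings standing in for BERT/CNN and LSTM-autoencoder outputs.
        patient_1 = fuse([0.20, 0.70, 0.10], [0.90, 0.30])
        patient_2 = fuse([0.25, 0.65, 0.05], [0.85, 0.40])
        print(round(cosine_similarity(patient_1, patient_2), 3))  # PSN edge weight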

    Trends, Technologies, and Key Challenges in Smart and Connected Healthcare

    Cardiovascular disease (CVD) is the leading cause of death globally and is increasing at an alarming rate, according to the American Heart Association's Heart Disease and Stroke Statistics – 2021 update. This increase has been further exacerbated by the current coronavirus (COVID-19) pandemic, adding pressure on existing healthcare resources. Smart and Connected Health (SCH) is a viable solution to these prevalent healthcare challenges. It can reshape the course of healthcare to be more strategic, preventive, and custom-designed, making it more effective with value-added services. This research classifies state-of-the-art SCH technologies via a thorough literature review and analysis, to comprehensively define SCH features and identify the enabling-technology-related challenges in SCH adoption. We also propose an architectural model that captures the technological aspects of the SCH solution, its environment, and its primary stakeholders; it serves as a reference model for SCH acceptance and implementation. We reflect on the COVID-19 case study, illustrating how some countries have tackled the pandemic differently by leveraging the power of different SCH technologies, such as big data, cloud computing, the Internet of Things, artificial intelligence, robotics, blockchain, and mobile applications. In combating the pandemic, SCH has been used effectively at different stages, such as disease diagnosis, virus detection, individual monitoring, tracking, controlling, and resource allocation. Furthermore, this review highlights the challenges to SCH acceptance, as well as potential research directions for better patient-centric healthcare.